Direct training of subspace distribution clustering hidden Markov model
Authors
Abstract
It generally takes a long time and requires a large amount of speech data to train hidden Markov models for a speech recognition task with a reasonably large vocabulary. Recently, we proposed a compact acoustic model called the “subspace distribution clustering hidden Markov model” (SDCHMM) with the aim of saving some of this training effort. SDCHMMs are derived by tying continuous density hidden Markov models (CDHMMs) at a finer, subphonetic level, namely the subspace distributions. Experiments on the Airline Travel Information System (ATIS) task show that SDCHMMs with significantly fewer model parameters (by one to two orders of magnitude) can be converted from CDHMMs with no loss in word accuracy [1], [2]. With such compact acoustic models, one should be able to train SDCHMMs directly from significantly less speech data, without intermediate CDHMMs. In this paper, we devise a direct SDCHMM training algorithm, assuming a priori knowledge of the subspace distribution tying structure. On the ATIS task, both context-independent and context-dependent speaker-independent 20-stream SDCHMM systems trained with 8 min of speech perform as well as their corresponding CDHMM systems trained with 105 min and 36 h of speech, respectively.
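To make the direct-training idea concrete, the sketch below shows one re-estimation pass under a known tying structure: each subspace Gaussian prototype pools occupancy-weighted statistics from every mixture component tied to it, which is why far fewer frames suffice per parameter. This is a minimal illustration, not the paper's implementation; the stream layout, the random tying map `tie_map`, and the occupancy posteriors `gamma` (which a real system would obtain from forward-backward over the SDCHMM) are all toy assumptions.

```python
# Minimal sketch of one re-estimation pass for direct SDCHMM training,
# assuming the subspace-distribution tying structure is known a priori.
# All names and sizes are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy configuration: 39-dim features split into 20 streams
# (19 streams of width 2, one of width 1), as in a 20-stream system.
dims = [2] * 19 + [1]
bounds = np.cumsum([0] + dims)          # stream k covers features bounds[k]:bounds[k+1]
n_streams = len(dims)
n_components = 6                        # toy count of (state, mixture) Gaussian components
n_protos = 4                            # prototypes per stream codebook (papers report 32-256)

# A priori tying structure: tie_map[m, k] = prototype used by component m in stream k.
tie_map = rng.integers(0, n_protos, size=(n_components, n_streams))

# Toy data and occupancy posteriors gamma[t, m]; a real system computes these
# with forward-backward over the SDCHMM. Here they are random and normalized.
T = 500
X = rng.normal(size=(T, 39))
gamma = rng.random((T, n_components))
gamma /= gamma.sum(axis=1, keepdims=True)

# One M-step: pool weighted statistics over every component tied to the same
# prototype, then re-estimate that prototype's mean and diagonal variance.
proto_mean = [np.zeros((n_protos, d)) for d in dims]
proto_var = [np.ones((n_protos, d)) for d in dims]
for k in range(n_streams):
    Xk = X[:, bounds[k]:bounds[k + 1]]              # data in stream k's subspace
    for p in range(n_protos):
        members = np.flatnonzero(tie_map[:, k] == p)  # components tied to prototype p
        if members.size == 0:
            continue
        w = gamma[:, members].sum(axis=1)           # pooled occupancy per frame
        wsum = w.sum()
        mean = (w[:, None] * Xk).sum(axis=0) / wsum
        var = (w[:, None] * (Xk - mean) ** 2).sum(axis=0) / wsum
        proto_mean[k][p], proto_var[k][p] = mean, np.maximum(var, 1e-4)

print("stream 0 prototype means:\n", proto_mean[0])
```

Because statistics are pooled across all tied components, each of the few hundred prototypes per stream sees far more data than an untied Gaussian would, which is the intuition behind matching a CDHMM system trained on 105 min of speech using only 8 min.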
Similar resources
Subspace Distribution Clustering HMM for Chinese Digit Speech Recognition
The hidden Markov model (HMM) is a statistical technique widely used for speech recognition. To train HMMs effectively from a much smaller amount of data, the subspace distribution clustering hidden Markov model (SDCHMM), derived from the continuous density hidden Markov model (CDHMM), is introduced. With parameter tying, a new method to train SDCHMMs is de...
Training of context-dependent subspace distribution clustering hidden Markov model
Training of continuous density hidden Markov models (CDHMMs) is usually time-consuming and tedious due to the large number of model parameters involved. Recently, we proposed a new derivative of the CDHMM, the subspace distribution clustering hidden Markov model (SDCHMM), which ties CDHMMs at the finer level of subspace distributions, resulting in many fewer model parameters. An SDCHMM training algorith...
Training of subspace distribution clustering hidden Markov model
In [2] and [7], we presented our novel subspace distribution clustering hidden Markov models (SDCHMMs), which can be converted from continuous density hidden Markov models (CDHMMs) by clustering subspace Gaussians in each stream over all models. Though such model conversion is simple and runs fast, it has two drawbacks: (1) it does not take advantage of the fewer model parameters in SDCHMMs: theore...
Microsoft Word - Hybridmodel2.dot
Today’s state-of-the-art speech recognition systems typically use continuous density hidden Markov models with mixtures of Gaussian distributions. Such systems have two problems: they require too much memory to run, and they are too slow for large-vocabulary applications. Two approaches are proposed for the design of compact acoustic models, namely, subspace distribution clustering hid...
An Acoustic-Phonetic and a Model-Theoretic Analysis of Subspace Distribution Clustering Hidden Markov Models
Recently, we proposed a new derivative of conventional continuous density hidden Markov modeling (CDHMM) that we call “subspace distribution clustering hidden Markov modeling” (SDCHMM). SDCHMMs can be created by tying low-dimensional subspace Gaussians in CDHMMs. In the tasks we tried, usually only 32–256 subspace Gaussian prototypes were needed in an SDCHMM-based system to maintain recognit...
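Several of the entries above describe the conversion route: deriving SDCHMMs from trained CDHMMs by clustering each stream's subspace Gaussians into a small prototype codebook (typically 32–256 prototypes). A hedged sketch follows; it uses plain Euclidean k-means over stacked (mean, standard deviation) vectors for one stream, whereas the cited work clusters with a distribution divergence measure, and all sizes and names are illustrative assumptions.

```python
# Minimal sketch of the CDHMM-to-SDCHMM conversion described above:
# cluster one stream's subspace Gaussians into a small prototype codebook.
# Plain k-means on (mean, std) vectors is used for simplicity; the papers
# cluster with a distribution divergence, so treat this as an approximation.
import numpy as np

rng = np.random.default_rng(1)

n_gauss, stream_dim, n_protos = 1000, 2, 32   # toy sizes; papers report 32-256 prototypes
means = rng.normal(size=(n_gauss, stream_dim))
stds = rng.uniform(0.5, 1.5, size=(n_gauss, stream_dim))
params = np.hstack([means, stds])             # each subspace Gaussian as a parameter vector

# Basic k-means: assign each subspace Gaussian to its nearest prototype, then recenter.
protos = params[rng.choice(n_gauss, n_protos, replace=False)]
for _ in range(20):
    d = ((params[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    labels = d.argmin(axis=1)
    for p in range(n_protos):
        if np.any(labels == p):
            protos[p] = params[labels == p].mean(axis=0)

# 'labels' is now the tying structure for this stream: every original Gaussian
# is replaced by a pointer into the small prototype codebook.
print("codebook shape:", protos.shape)
```

The resulting `labels` array is exactly the kind of tying structure that the direct training algorithm in the main paper assumes as a priori knowledge.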
Journal: IEEE Trans. Speech and Audio Processing
Volume: 9, Issue: -
Pages: -
Publication year: 2001